Rude Prompts Outperform Polite Ones in AI Chatbot Interactions, Study Finds
Penn State researchers found that impolite prompts yield higher accuracy from large language models such as ChatGPT than polite ones do. The study, "Mind Your Tone: Investigating How Prompt Politeness Affects LLM Accuracy," reported that "very rude" prompts produced correct answers 84.8% of the time, while "very polite" prompts scored 80.8%. This contradicts earlier assumptions that civility improves AI performance.
The findings challenge prior research suggesting LLMs respond better to courteous tones. "Contrary to expectations, impolite prompts consistently outperformed polite ones," noted authors Om Dobariya and Akhil Kumar. The results highlight tone as a potential hidden variable in prompt engineering, with implications for how users interact with AI systems.